Hippocampal Spatial Model for State Space Representation in Robotic Reinforcement Learning
Abstract
We study reinforcement learning for cognitive navigation. The state-space representation is constructed by unsupervised learning during exploration. Learning yields a stable representation of the continuous two-dimensional manifold embedded in the high-dimensional input space, consisting of a population of localized, overlapping place fields. This state-space coding is a biologically inspired model of place fields in the hippocampus of the rat. Place fields are learned by extracting spatiotemporal properties of the environment from visual sensory input. Visual ambiguities are resolved by incorporating self-motion signals through path integration; this solves the hidden-state problem and provides a robust representation suitable for Q-learning in continuous space. Reward-based function approximation drives action units located one synapse downstream from the place cells, with the teaching error modeling the dopaminergic reward-expectation signal of neurons in the brainstem. Several action modules share the same spatial representation and guide the robot to multiple targets. The system is validated experimentally on a Khepera mobile robot.
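As a minimal sketch of the scheme just described (not the authors' implementation): the place-field population is modeled below as fixed Gaussian tuning curves over a unit arena, a linear Q-function is read out one synapse downstream, and the TD error plays the role of the dopamine-like teaching signal. All constants, the toy kinematics in step(), and the grid of place-field centers are illustrative assumptions; in the paper the fields are learned from vision and path integration rather than fixed in advance.

    import numpy as np

    rng = np.random.default_rng(0)

    # Place-field population: localized, overlapping Gaussian activations
    # on a 10 x 10 grid of centers (an assumption for this sketch).
    N_CELLS = 100
    SIGMA = 0.15
    grid = np.linspace(0.0, 1.0, 10)
    centers = np.array([(x, y) for x in grid for y in grid])

    def place_activity(pos):
        # Population activity r_i = exp(-||pos - c_i||^2 / (2 sigma^2)).
        d2 = np.sum((centers - pos) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * SIGMA ** 2))

    # Action units one synapse downstream: linear Q(s, a) = W[a] . r(s).
    ACTIONS = np.array([(0.05, 0.0), (-0.05, 0.0), (0.0, 0.05), (0.0, -0.05)])
    W = np.zeros((len(ACTIONS), N_CELLS))

    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
    GOAL = np.array([0.9, 0.9])

    def step(pos, a):
        # Toy kinematics: move, clip to the arena, reward near the goal.
        new = np.clip(pos + ACTIONS[a], 0.0, 1.0)
        done = np.linalg.norm(new - GOAL) < 0.1
        return new, (1.0 if done else 0.0), done

    for episode in range(200):
        pos = rng.uniform(0.0, 1.0, size=2)
        for t in range(300):
            r_s = place_activity(pos)
            q = W @ r_s
            a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(q))
            new_pos, reward, done = step(pos, a)
            q_next = 0.0 if done else np.max(W @ place_activity(new_pos))
            delta = reward + GAMMA * q_next - q[a]   # dopamine-like TD error
            W[a] += ALPHA * delta * r_s              # one-synapse weight update
            pos = new_pos
            if done:
                break

Because each action module is just one weight matrix W over the shared place-cell activations, several such modules can be trained for different goals on top of the same representation, which is the multiple-target setup the abstract describes.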
Similar resources
Learning Visual Feature Spaces for Robotic Manipulation with Deep Spatial Autoencoders
Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera ima...
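The truncated snippet above refers to learning a compact state directly from camera images. As a hedged illustration of one common building block for such visual encoders, the sketch below implements a spatial softmax that converts each convolutional feature map into the expected image location of its strongest activation, yielding a low-dimensional state of feature points. The function name, the shapes, and the random feature maps in the usage line are assumptions for illustration, not the cited paper's code.

    import numpy as np

    def spatial_softmax(feature_maps):
        # feature_maps: (C, H, W) -> (C, 2) expected (x, y) coordinates
        # in normalized image space [-1, 1].
        C, H, W = feature_maps.shape
        xs = np.linspace(-1.0, 1.0, W)
        ys = np.linspace(-1.0, 1.0, H)
        flat = feature_maps.reshape(C, -1)
        p = np.exp(flat - flat.max(axis=1, keepdims=True))  # stable softmax
        p /= p.sum(axis=1, keepdims=True)
        p = p.reshape(C, H, W)
        ex = (p.sum(axis=1) * xs).sum(axis=1)  # expectation over x
        ey = (p.sum(axis=2) * ys).sum(axis=1)  # expectation over y
        return np.stack([ex, ey], axis=1)

    # Example: 32 feature maps from some upstream conv net become a
    # 64-dimensional state vector for the reinforcement learner.
    state = spatial_softmax(np.random.randn(32, 60, 80)).ravel()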
Generalization and Transfer Learning in Noise-Affected Robot Navigation Tasks
When a robot learns to solve a goal-directed navigation task with reinforcement learning, the acquired strategy can usually be applied only to the task that was learned. Knowledge transfer to other tasks and environments is a great challenge, and the transfer learning ability crucially depends on the chosen state space representation. This work shows how an agent-centered qualitativ...
Hierarchical Functional Concepts for Knowledge Transfer among Reinforcement Learning Agents
This article introduces the notions of functional space and concept as a means of knowledge representation and abstraction for reinforcement learning agents. These definitions are used as a tool for knowledge transfer among agents. The agents are assumed to be heterogeneous: they have different state spaces but share the same dynamics, reward, and action space. In other words, the agents are assumed t...
Situation Dependent Spatial Abstraction in Reinforcement Learning Based on Structural Knowledge
State space abstraction reduces the size of a representation by factoring out details that are not relevant for solving the task at hand. But even in an abstract representation, not every detail is relevant in every situation. In cases where the structure of the environment allows only one particular action selection, all information that does not relate to that structure can be omitted. We present...
Dissertation: An Echo State Model of Non-Markovian Reinforcement Learning
There exists a growing need for intelligent, autonomous control strategies that operate in real-world domains. In theory, the state-action space must exhibit the Markov property for reinforcement learning to be applicable. Empirical evidence, however, suggests that reinforcement learning also applies to doma...
Publication date: 1999